Patent abstract:
The invention relates to a method for determining the light field of an object (20) by means of an image recording unit (10) having an area sensor (11), wherein a plurality of single-line images of a particular object line are recorded at different times (t1, ..., tm), the angle to the surface normal being different for each single-line image, and wherein a subregion comprising a number of lines (12a ... 12g) of the area sensor (11) is selected. According to the invention, a three-dimensional n x m x (p + m - 1) light field data structure L(iS, iT, iP) is created for the light field of the object (20), whose entries are each color or brightness values, where n corresponds to the length of the rows (12) and m corresponds to the size, in number of rows, of the selected region (13). At predetermined times (t1, ..., tm) with a time index (iT = 1, ..., m), sensor measured values MiT(iP, iS) are respectively generated with the sensor pixels (14), and the determined sensor measured values MiT(x(iP), iS) are entered into the light field data structure (L) according to the following rule: L(iS, iT, iP) := MiT(x(iP), iS), where iP = 1 ... p.
Publication number: AT515318A4
Application number: T50072/2014
Filing date: 2014-01-31
Publication date: 2015-08-15
Inventor:
Applicant: Ait Austrian Inst Technology;
IPC main class:
Patent description:

The invention relates to a method for determining the light field of an object by means of an image recording unit having an area sensor according to the preamble of claim 1. The background of the invention is the aim not merely to image individual objects two-dimensionally, but to determine the light field emitted by these objects. Knowledge of the light field emitted by the respective object can be used for different purposes: it makes it possible to obtain a quasi-three-dimensional image of the object or to image an object with optimum sharpness. In addition, many other applications of the motion light field emitted by the object are known.
Many different methods for recording motion light fields are known from the prior art. For example, multi-camera arrays have been proposed which take a large number of different shots of the same object in order to subsequently determine a light field of the object. Various optical systems have also been proposed that allow an object to be imaged from several different angles. For example, by means of a plenoptic microlens array, a large number of images can be obtained from slightly offset pick-up points. For this purpose, an array of individual so-called microlenses is placed in the receiving area of the camera, whereby a two-dimensional image is obtained which contains a grid-like arrangement of partial images representing the light field of the respective object.
All these methods have significant disadvantages: in particular, the resolution of the images is extremely low and the data volume of the created light fields is very large. Another disadvantage of the mentioned methods is that the number of different viewing angles is predetermined and cannot be changed flexibly.
The object of the invention is to determine the so-called motion light field, which results from the relative movement between an object in the receiving area and an area sensor imaging this object, wherein the memory space required for storing the light field is reduced compared with the prior art.
The invention solves this problem in a method of the type mentioned above with the characterizing feature of claim 1.
According to the invention, in a method for determining the light field of an object by means of an image recording unit having an area sensor, - wherein the area sensor has a number of sensor pixels arranged in a grid of rows and columns, to each of which a row index and a column index are assigned in accordance with its arrangement, - wherein the object and the image recording unit perform a translational relative movement parallel to the direction of the columns of the area sensor, - wherein the object is moved relative to the image recording unit in an object plane spaced from the image recording unit by a predetermined distance, the object being moved through the receiving area of the image recording unit, and - wherein several single-line images of a particular object line are recorded at different times, the angle to the surface normal being different for each of the single-line images, a subregion comprising a number of lines, in particular all lines, of the area sensor 11 is selected and the selected lines are each assigned a selection index x, the selection index x being assigned to the lines in ascending order with increasing line index, the selection index x of the first selected row, i.e. the row having the lowest row index among the selected rows, having the value 1 and the selection index x of the selected row with the highest row index among the selected rows corresponding to the value p of the number of selected rows. It is provided - that a three-dimensional n x m x (p + m - 1) light field data structure L(iS, iT, iP) corresponding to the light field of the object is created, whose entries are each color or brightness values, - where n corresponds to the length of the lines, - where m corresponds to the size, in number of lines, of the selected region, - that at predetermined times with a time index (iT = 1, ..., m) sensor measured values MiT(iP, iS) are respectively created with the sensor pixels of the selected lines, each sensor measured value MiT(iP, iS) being assigned the time index of its recording as well as the selection index x(iP) of the line and the column index of the sensor pixel detecting the sensor measured value, and - that the determined sensor measured values MiT(x(iP), iS) are entered into the light field data structure according to the following rule: L(iS, iT, iP) := MiT(x(iP), iS), where iP = 1 ... p.
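The entry rule can be illustrated with a minimal Python/NumPy sketch. The 0-based indices, the function names and the explicit loops are assumptions made for readability, not part of the patent; the time-shifted placement used in the exemplary embodiment is shown further below with Figs. 4b to 4d.

```python
import numpy as np

def create_light_field(n, m, p):
    """Allocate the three-dimensional n x m x (p + m - 1) light field data
    structure L; each entry holds a brightness value (or a color triple)."""
    return np.zeros((n, m, p + m - 1), dtype=np.float32)

def enter_measurement(L, M_iT, iT, x):
    """Enter one recording into L according to the rule of claim 1:
        L(iS, iT, iP) := M_iT(x(iP), iS)   for iP = 1 ... p
    M_iT : 2-D array of sensor readings, indexed (row index, column index)
    iT   : time index of this recording (0-based here)
    x    : selected sensor rows; x[iP] is the row with selection index iP
    """
    n = L.shape[0]
    for iP, row in enumerate(x):        # selection index (0-based here)
        for iS in range(n):             # column index iS = 0 ... n-1
            L[iS, iT, iP] = M_iT[row, iS]
```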
According to the invention, it is thus advantageously possible to determine the motion light fields of objects that are moved through the receiving area of a sensor by means of a relative movement. The data determined by the recording and made available by the method according to the invention comprise merely a three-dimensional data structure which, compared with the four-dimensional motion light fields of recorded objects, requires an amount of memory reduced by an order of magnitude.
An advantageous synchronization for recording precise motion light fields provides that the time interval between each two adjacent recording times, at which sensor measured values are determined, is chosen such that the object has in each case traveled the distance between two adjacent observation lines in the transport direction.
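A minimal sketch of this synchronization, assuming a constant transport speed v and using illustrative symbol names:

```python
def recording_interval(D, v):
    """Time between two adjacent recordings, chosen so that the object has
    travelled exactly the spacing D of two adjacent observation lines on the
    object plane at constant transport speed v."""
    return D / v

# e.g. observation lines 0.2 mm apart, transport speed 100 mm/s:
# recording_interval(0.2, 100.0) == 0.002  (seconds between recordings)
```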
A preferred memory layout for recording and further processing the determined image data provides that the light field data structure is designed as a ring memory of size n × m × (p + m-1), wherein each entry in the ring memory is either a brightness value or a color value triple.
In order to obtain an image of the object having optimum image sharpness at each point, it may be provided that a distance value Δ is set as an integer with a value greater than 1, every Δ-th sensor line being selected and the sensor line with the row index iz = o + x * Δ being assigned the selection index x, where o corresponds to an integer offset value.
In order to better exploit the characteristics of an area sensor having a color filter coating according to the Bayer pattern, it may be provided that a color light field data structure L = (LR, LG, LB) is generated. A separate light field data structure is generated for each color using only the pixels available for that color, individual entries remaining undefined in the absence of color and brightness information. Subsequently, for columns with an alternating green/blue filter arrangement, missing color information is determined by interpolation: in the case that the respective position is provided with a green filter, the proportion LR of the red light is determined according to a first interpolation rule, and in the case that the respective position is provided with a blue filter, the proportion LR of the red light is determined according to a second interpolation rule. Likewise, for columns with an alternating green/red filter arrangement, missing color information is determined by interpolation: in the case that the respective position is provided with a green filter, the proportion LB of the blue light is determined according to a corresponding interpolation rule, and in the case that the respective position is provided with a red filter, the proportion LB of the blue light is determined according to a further interpolation rule.
A preferred embodiment is illustrated in more detail with reference to the following figures. Fig. 1a shows a transport device, an object transported on this transport device and a recording unit with an area sensor. Fig. 1b shows an area sensor comprising a number of sensor pixels arranged in rows and columns in the form of a grid. Fig. 1c schematically shows the visual rays of the individual sensor pixels of the area sensor. Fig. 2a shows the recording of an object point at different times. Fig. 2b shows a schematic representation of the recording of an object from different angles. Figs. 3a to 3c show individual images produced during the recording. Figs. 4a to 4d show the light field data structure and the entry of the determined sensor measured values into the light field data structure at different times.
The preferred embodiment of the invention illustrated in Fig. 1a uses a transport device 30 for transporting an object 20 through the receiving area of an image recording unit 10. This image recording unit 10 has an area sensor 11 which comprises a number of sensor pixels 14 arranged in a grid of rows 12a ... 12g and columns 13 (Fig. 1b). According to their arrangement in the grid of the area sensor 11, the sensor pixels 14 each have a row index iz, which indicates the respective row 12 in the grid, and a column index is, which indicates the respective column 13 in the grid.
The device shown in Fig. 1a comprises a conveyor unit 30 by means of which an object 20 is conveyed through the receiving area of the image recording unit 10. In the present embodiment, the object 20 is a banknote or other printed product, which is formed substantially flat. The object 20 is moved in an object plane 21 relative to the image recording unit at a predetermined distance d. In this case, the object 20 performs a translational relative movement relative to the image recording unit 10, the object 20 being moved parallel to the direction of the columns of the area sensor 11. In the present embodiment, the object plane 21 is normal to the optical axis of the image recording unit 10 and is spaced from the image recording unit 10 along this optical axis by the distance d.
In the present embodiment, the area sensor 11, as shown in Fig. 1b, comprises a plurality of sensor pixels 14 arranged in a grid of rows 12 and columns 13. As a result of the grid-shaped arrangement, each of the sensor pixels 14 can be assigned a row index iz and a column index is. The number of columns 13 used in the present embodiment determines the resolution with which the object 20 is imaged normal to the direction of travel R. The number of lines 12a to 12g determines the number and position of the different viewing angles under which the object is recorded in the course of its relative movement relative to the image recording unit.
In Fig. 1c, the position of the visual rays 16 of the individual sensor pixels is shown schematically. It can be seen here that the visual rays of the sensor pixels of each line, e.g. line 12b, intersect the object plane 21 on straight lines 22b that are parallel to one another and normal to the transport direction R. Each line 12a, 12b, 12c of the area sensor 11 can thus be assigned exactly one straight line 22a, ..., 22c on the object plane 21, on which the intersections of the visual rays of the sensor pixels of the respective line with the object plane 21 lie. These straight lines 22a, 22b, 22c are parallel to the respective adjacent straight lines 22a, ..., 22c and each have the same distance D from one another.
If the object 20 is moved relative to the image recording unit 10 during its relative movement along the direction of travel R, it is imaged at different times t, t+1, t+2 by different sensor lines iz and thus under a different viewing angle α1, α2, etc.
With the procedure described below, it is achieved that, for each recorded object point of the object 20, photographs are available which show it from different angles α1 to α5 and thus produce a motion light field of the object 20.
With this procedure, it is possible to achieve an effect corresponding to the recording arrangement shown in Fig. 2b, each of the recording devices 10' shown in Fig. 2b having a line sensor 11' which is normal to the direction of movement and to the viewing plane of Fig. 2b.
Figs. 3a, 3b and 3c show, respectively, images of an object 20 with an elevation 200 located on it. While the appearance of the body 201 of the object lying in the object plane 21 does not change over time, the image of the elevation 200 on the object 20 clearly shows differences because the elevation 200 protrudes from the object plane 21. In particular, protruding side surfaces 202, 203 can be recognized in Figs. 3a and 3c, which cannot be recognized in the plan view of the elevation 200 shown in Fig. 3b.
In a first step, a three-dimensional light field data structure L corresponding to and containing the light field of the object 20 is created. This is a three-dimensional data field or array of size n x m x (p + m - 1). The entries of the light field data structure L each represent color values or brightness values determined by the sensor pixels 14. The field dimension n corresponds to the length of the lines 12, i.e. the number of columns. The field dimension m corresponds to the length of the columns 13, i.e. the number of lines. The value p is set to a predetermined value corresponding to a maximum time index, which in the present exemplary embodiment is increased by the number m of lines 12 reduced by 1 to give the third field dimension. In the light field data structure used in the present embodiment, n, m and p have the values 2000, 9 and 9. That is, each object line is represented by p = 9 independent viewpoints, all m = 9 sensor lines are used, and at each recording time a memory of 2000 * 9 * 8 pixels is filled. If all views of an image with a total number of e.g. 5000 object lines are stored, p views of each object line having to be saved for each recording time, this results in 5000 * 2000 * 9 pixel readings.
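The sizes quoted for this exemplary embodiment can be checked with a few lines; the figures merely restate the numbers given above (n = 2000, m = 9, p = 9, 5000 object lines):

```python
n, m, p = 2000, 9, 9            # line length, lines per recording, viewpoints
third_dim = p + m - 1           # = 17, size of the third field dimension
light_field_entries = n * m * third_dim     # 2000 * 9 * 17 = 306000 entries

# storing p = 9 views for each of e.g. 5000 recorded object lines:
total_pixel_readings = 5000 * 2000 * 9      # = 90000000 pixel readings
```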
The light field data structure L used here is shown schematically in Fig. 4a, the third coordinate of the light field data structure, denoted by the index iP, being represented by stacking the individual layers on top of one another.
In the course of the relative movement of the object 20 relative to the image recording unit 10, a number of recordings are created, each of which is assigned a time index iT, which in the present exemplary embodiment has a value between 1 and m. These recordings are made at predetermined times t1, ..., tm, which in the present exemplary embodiment are each equally spaced from the respectively following recording time. The sensor measured values recorded at the same instant t are stored in a common data structure MiT(iz, is). The individual data structures M0, M1, ..., Mp of sensor measured values recorded at the times t0, ..., tp are shown in Figs. 4b to 4d. Each data structure MiT(iz, is) of stored sensor measured values has as many data fields as can be recorded with the area sensor 11 during one recording. Here, the index iz corresponds to the row index of the pixel producing the respective sensor measured value, and the index is to the column index of the sensor pixel 14 producing the respective sensor measured value.
As shown in Fig. 4b, the sensor readings of the first row of the field M0 of sensor readings are entered into the light field data structure L at location L(is, 0, 0). The second row of the field M0 of sensor measured values is correspondingly entered into the row L(is, 1, 1) of the light field data structure L, etc. The individual sensor measured values M0(iz, is) recorded at the time t0 with the time index iT = 0 are entered into the light field data structure according to the rule L(is, iz, iz) := M0(iz, is), where iz = 0 ... m-1. The last row of the field M0 of sensor measured values is correspondingly entered into row L(is, m-1, m-1).
As shown in Fig. 4c, the sensor readings of the first row of the field M1 of sensor readings are entered into the light field data structure L at location L(is, 0, 1). The second row of the field M1 of sensor measured values is correspondingly entered into the row L(is, 1, 2) of the light field data structure L, etc. The individual sensor measured values M1(iz, is) recorded at time t1 with the time index iT = 1 are entered into the light field data structure according to the rule L(is, iz, iz+1) := M1(iz, is), where iz = 0 ... m-1. The last row of the field M1 of sensor measured values is correspondingly entered into row L(is, m-1, m).
As shown in Fig. 4d, the sensor readings of the first row of the field Mp of sensor readings are entered into the light field data structure L at location L(is, 0, p). The second row of the field Mp of sensor measured values is correspondingly entered into the row L(is, 1, p+1) of the light field data structure L, etc. The individual sensor measured values Mp(iz, is) recorded at time tp with the time index iT = p are entered into the light field data structure according to the rule L(is, iz, iz+p) := Mp(iz, is), where iz = 0 ... m-1. The last row of the field Mp of sensor readings is correspondingly entered into row L(is, m-1, m-1+p).
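The placement of the recordings M0 ... Mp illustrated in Figs. 4b to 4d can be summarized in a short sketch. Indices are 0-based here, the driver loop and the random stand-in data are purely illustrative assumptions, and the number of recordings is limited so that the shifts fit into the third dimension; the ring-memory variant discussed next removes that limitation.

```python
import numpy as np

def enter_recording(L, M, iT):
    """Enter the recording M taken with time index iT into the light field
    data structure L, each sensor line shifted by iT along the third
    coordinate, as in Figs. 4b to 4d:
        L(is, iz, iz + iT) := M_iT(iz, is),   iz = 0 ... m-1
    """
    m, n = M.shape                     # lines per recording, line length
    for iz in range(m):
        L[:, iz, iz + iT] = M[iz, :]   # all columns is = 0 ... n-1 at once

n, m, p = 2000, 9, 9
L = np.zeros((n, m, p + m - 1))
for iT in range(p):                    # p recordings fit without wrap-around
    M_iT = np.random.rand(m, n)        # stand-in for the real sensor readings
    enter_recording(L, M_iT, iT)
```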
The lines of the light field data structure L newly entered in the respective steps are shown vertically hatched; the already filled lines of the light field data structure L are shown obliquely hatched.
It is also particularly advantageous if, for the storage of the light field, the light field data structure L(is, iz, iT) is designed as a ring memory with respect to the third index iT, in which case light field data can be recorded continuously without writing beyond the memory limits.
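A minimal sketch of such a ring memory with respect to the third index, assuming a simple modulo wrap-around as one possible implementation:

```python
def enter_recording_ring(L, M, iT):
    """As enter_recording above, but treating the third index as a ring
    memory so that recordings can be entered continuously without writing
    past the memory limits."""
    m, n = M.shape
    depth = L.shape[2]                         # p + m - 1
    for iz in range(m):
        L[:, iz, (iz + iT) % depth] = M[iz, :]
```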
Of course, it is not necessary that every single line be used for the creation of the light field of the object. In some cases, in order to achieve a greater angular resolution of the light field data structure, it is more convenient to use lines spaced apart from one another. In the preferred embodiment of the invention described below, it is therefore provided that not every single line is used for the formation of the light field data structure L. Rather, in order to keep the light field data structure small and to save memory, it is also possible to use only selected lines. In the present exemplary embodiment, a selection index is created for identifying the selected rows and is assigned to the respectively selected rows. The selection index is assigned consecutively from 1 to p, where p corresponds to the number of selected lines, lines with a smaller selection index also each having a smaller line index.
Typically, in a simplified manner, a distance value Δ is specified, which corresponds to an integer greater than 1. In the present embodiment, the distance value Δ has been set to 8, which means that every eighth line is used for the creation of the light field data structure. For the precise definition of the lines, an offset value o is additionally specified, which indicates at which position the individual lines are located. The sensor line 12a ... 12g with the line index iz = o + x * Δ is assigned the selection index x. In the present embodiment, o is assigned the value -4. The row with the selection index 1 is therefore row 4; the row with the selection index 2 has the row index iz = 12, and so on.
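The line selection just described can be written down in a few lines; Δ = 8 and o = -4 are the values of the embodiment, while the number of selected lines p = 9 is assumed here only for illustration:

```python
def selected_rows(p, delta, offset):
    """Row indices of the selected sensor lines, iz = offset + x * delta,
    for the selection indices x = 1 ... p."""
    return [offset + x * delta for x in range(1, p + 1)]

# embodiment values: delta = 8, offset = -4
# selected_rows(9, 8, -4) -> [4, 12, 20, 28, 36, 44, 52, 60, 68]
```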
Moreover, it is of course also possible to create a light field data structure whose resolution is inhomogeneous in the direction of the angle, a larger number of lines being selected in the region of the center, which typically offers high recording accuracy and low distortion, and a smaller number of lines being selected in the edge region, in which recording distortions due to the optics may exist. In this case, the selection index x is again assigned to the rows in ascending order with increasing row index, the value 1 being assigned to the first selected row, i.e. the row having the lowest row index among the selected rows.
Another preferred embodiment of the invention makes it possible to produce a color light field data structure L = (LR, LG, LB) comprising a separate light field data structure LR, LG, LB for each individual color channel. For this embodiment, the area sensor 11 used has a color filter coating according to the Bayer pattern: there are a number of columns with an alternating green-blue filter arrangement and a number of columns with an alternating green-red filter arrangement. Since a color and brightness value is not available at every position for every color channel, individual values of the light field data structures LR, LG, LB remain undefined for each color channel. Missing color and brightness information can be obtained by interpolation; for columns with an alternating green/blue color filter arrangement it is supplemented from adjacent rows according to the following assignment rules.
At a position where there is a pixel with an upstream green filter, the red component is determined as follows:
At a position where there is a pixel with an upstream blue filter, the red component is determined as follows:
Similarly, for columns of alternating green / red color filter array adjacent lines, the missing color and brightness values may be supplemented according to the following mapping rules:
At a position where a pixel with upstream green filter is located, the blue fraction is determined as follows:
At a position where there is a pixel with an upstream red filter, the blue fraction is determined as follows:
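The interpolation formulas themselves are reproduced only as figures in the original document and are not restated here. As a stand-in, the following sketch fills undefined entries of one color plane with the mean of its defined four-neighbours, i.e. a plain averaging rule in the spirit of the prior-art demosaicing methods mentioned below; it should not be read as the patented formulas.

```python
import numpy as np

def fill_missing_channel(channel, mask):
    """Stand-in interpolation: fill entries of one color plane (e.g. LR or LB)
    that are undefined (mask == False) with the mean of the defined
    four-neighbours; measured entries are left unchanged.
    channel : 2-D array (rows x columns) of one color plane
    mask    : boolean array, True where a measured value exists
    """
    filled = channel.astype(float).copy()
    rows, cols = channel.shape
    for r in range(rows):
        for c in range(cols):
            if not mask[r, c]:
                neighbours = [channel[r + dr, c + dc]
                              for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= r + dr < rows and 0 <= c + dc < cols
                              and mask[r + dr, c + dc]]
                if neighbours:
                    filled[r, c] = float(np.mean(neighbours))
    return filled
```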
As an alternative to the proposed procedure for producing a color field data structure, another prior art method for demosaicing may be used.
Claims:
Claims (5)
[1]
1. Method for determining the light field of an object (20) by means of an image recording unit (10) having an area sensor (11), - wherein the area sensor (11) comprises a number of sensor pixels (14) arranged in a grid of rows (12) and columns (13), to each of which a row index (iz) and a column index (is) is assigned in accordance with its arrangement, - wherein the object (20) and the image recording unit (10) perform a translational relative movement parallel to the direction of the columns (13) of the area sensor, in which the object (20) is moved relative to the image recording unit (10) in an object plane (21) spaced apart from the image recording unit (10) by a predetermined distance (d), the object (20) being moved through the receiving area of the image recording unit (10), and - wherein a plurality of single-line images of a particular object line are taken at different times (t1, ..., tm), the angle to the surface normal being different for each of the single-line images, - wherein a subregion comprising a number of lines (12a ... 12g), in particular all lines, of the area sensor (11) is selected and the selected lines (12a ... 12g) are each assigned a selection index x, the selection index x being assigned to the lines in ascending order with increasing line index (iz), the selection index x of the first selected line, having the lowest line index among the selected lines, having the value 1 and the selection index x of the selected line with the highest line index among the selected lines corresponding to the value p of the number of selected lines, characterized in that - a three-dimensional n x m x (p + m - 1) light field data structure L(iS, iT, iP) corresponding to the light field of the object (20) is created, whose entries are each color or brightness values, - where n corresponds to the length of the lines (12), - where m corresponds to the size, in number of lines, of the selected region (13), - that at predetermined times (t1, ..., tm) with a time index (iT = 1, ..., m) sensor measured values MiT(iP, is) are respectively created with the sensor pixels (14) of the selected lines, each sensor measured value MiT(iP, is) being assigned the time index (iT) of its recording as well as the selection index x(iP) of the line and the column index (is) of the sensor pixel (14) producing the sensor measured value, and - that the determined sensor measured values MiT(x(iP), is) are entered into the light field data structure (L) according to the following rule:
L(iS, iT, iP) := MiT(x(iP), iS), where iP = 1 ... p
[2]
2. The method according to claim 1, characterized in that the time interval between each two adjacent recording times (ti, ti+1) at which sensor measured values (M) are determined is chosen such that the object (20) has in each case moved by the distance of two adjacent observation lines (22a, 22b, 22c, ...) in the transport direction (R).
[3]
3. The method according to claim 1 or 2, characterized in that the light field data structure is designed as a ring memory of size n x m x (p + m - 1), wherein each entry in the ring memory is either a brightness value or a color value triple.
[4]
4. The method according to any one of the preceding claims, characterized in that a distance value Δ is set as an integer with a value greater than 1, every Δ-th sensor line (12a ... 12g) is selected, and the sensor line (12a ... 12g) with the row index iz = o + x * Δ is assigned the selection index (x), where o corresponds to an integer offset value.
[5]
5. The method according to claim 1, characterized in that a color light field data structure L = (LR, LG, LB) for an area sensor (11) with a color filter coating according to the Bayer pattern is generated, wherein a separate light field data structure (LR, LG, LB) is generated for each color using only the pixels available for that color, with individual entries remaining undefined in the absence of color and brightness information, and subsequently, for columns with an alternating green/blue filter arrangement, missing color information according to

is determined, and in the case that the respective position is provided with a green filter, the proportion LR of the red light is determined as follows,

and in the case that the respective position is provided with a blue filter, the proportion LR of the red light is determined as follows:



and for columns with alternating green / red filter arrangement, missing color information according to

is determined, and in the case that the respective position is provided with a green filter, the proportion LB of the blue light is determined as follows

and in the case that the respective position is provided with a red filter, the proportion LB of the blue light is determined as follows

Family patents:
Publication number | Publication date
EP2903264A1|2015-08-05|
EP2903264B1|2018-08-15|
AT515318B1|2015-08-15|
Legal status:
Priority:
Application number | Filing date | Patent title
ATA50072/2014A|AT515318B1|2014-01-31|2014-01-31|Method for determining the light field of an object|
EP14455007.6A| EP2903264B1|2014-01-31|2014-12-17|Method for determining the light field of an object|